The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
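To make two of the most commonly reported strategies concrete (patch-based training for oversized samples and k-fold cross-validation on the training set), here is a minimal sketch; it uses NumPy and scikit-learn, and the volume shape, patch size, and case count are illustrative assumptions rather than values taken from any surveyed solution.

```python
import numpy as np
from sklearn.model_selection import KFold

def sample_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Randomly crop fixed-size patches from a volume that is too large to process at once."""
    rng = rng or np.random.default_rng(0)
    patches = []
    for _ in range(n_patches):
        start = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
        patches.append(volume[tuple(slice(st, st + p) for st, p in zip(start, patch_size))])
    return np.stack(patches)

vol = np.zeros((128, 160, 144), dtype=np.float32)        # illustrative CT-sized volume
patches = sample_patches(vol)                             # (8, 64, 64, 64) training patches

# 5-fold cross-validation over training cases, as reported by 37% of participants
case_ids = np.arange(100)                                 # illustrative: 100 training cases
for fold, (train_idx, val_idx) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(case_ids)):
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val cases")
```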
In this report, we focus on reconstructing clothed humans in the canonical space given multiple views and poses of a human as the input. To achieve this, we utilize the geometric prior of the SMPLX model in the canonical space to learn the implicit representation for geometry reconstruction. Based on the observation that the posed mesh and the mesh in the canonical space share the same topology, we propose to learn latent codes on the posed mesh by leveraging multiple input images and then assign the latent codes to the mesh in the canonical space. Specifically, we first leverage normal and geometry networks to extract the feature vector for each vertex on the SMPLX mesh. Normal maps are adopted instead of 2D images for better generalization to unseen images. Then, the features of each vertex on the posed mesh from multiple images are integrated by MLPs. The integrated features, acting as the latent code, are anchored to the SMPLX mesh in the canonical space. Finally, the latent code for each 3D point is extracted and utilized to calculate the SDF. Our method for reconstructing the human shape in the canonical pose achieved 3rd place in the WCPA MVP-Human Body Challenge.
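A minimal sketch of the final step described above: for each query point, fetch the latent code anchored at the nearest canonical SMPLX vertex and regress an SDF value with an MLP. The latent dimension, MLP layout, and nearest-vertex lookup are assumptions for illustration, not the report's exact design.

```python
import torch
import torch.nn as nn

class SDFHead(nn.Module):
    """Maps a query point plus the latent code of the nearest canonical SMPLX vertex to an SDF value."""
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, canon_vertices, vertex_codes):
        # points: (N, 3) queries; canon_vertices: (V, 3); vertex_codes: (V, latent_dim)
        dists = torch.cdist(points, canon_vertices)        # (N, V) point-to-vertex distances
        nearest = dists.argmin(dim=1)                       # index of nearest canonical vertex
        codes = vertex_codes[nearest]                       # (N, latent_dim) anchored latent codes
        return self.mlp(torch.cat([points, codes], dim=-1)).squeeze(-1)

# illustrative shapes: 10475 SMPLX vertices, 64-dim integrated latent codes, 1024 query points
sdf = SDFHead()(torch.randn(1024, 3), torch.randn(10475, 3), torch.randn(10475, 64))
```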
Adding perturbations by utilizing auxiliary gradient information and discarding existing details of the benign images are two common approaches for generating adversarial examples. Though visual imperceptibility is the desired property of adversarial examples, conventional adversarial attacks still generate traceable adversarial perturbations. In this paper, we introduce a novel Adversarial Attack via Invertible Neural Networks (AdvINN) method to produce robust and imperceptible adversarial examples. Specifically, AdvINN fully takes advantage of the information preservation property of Invertible Neural Networks and thereby generates adversarial examples by simultaneously adding class-specific semantic information of the target class and dropping discriminant information of the original class. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K demonstrate that the proposed AdvINN method produces adversarial images that are less perceptible than those of state-of-the-art methods, and that AdvINN yields more robust adversarial examples with high confidence compared to other adversarial attacks.
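The attack itself depends on details beyond the abstract, but the information-preservation property it builds on can be illustrated with a single invertible affine coupling block whose forward and inverse passes reconstruct the input exactly; this is a generic sketch, and the layer sizes and channel split are assumptions, not AdvINN's architecture.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible affine coupling block: forward() and inverse() are exact inverses, so no information is lost."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.net = nn.Sequential(nn.Conv2d(half, half * 2, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(half * 2, half * 2, 3, padding=1))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        return torch.cat([x1, x2 * torch.exp(log_s) + t], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=1)

block = AffineCoupling(channels=6)            # e.g. two stacked RGB-like tensors
x = torch.randn(1, 6, 32, 32)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-4)   # exact invertibility up to float error
```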
This paper investigates the task of 2D whole-body human pose estimation, which aims to localize keypoints on the entire human body, including the body, feet, face, and hands. We propose a single-network approach, termed ZoomNet, to take into account the hierarchical structure of the full human body and to address the scale variation across different body parts. We further propose a neural architecture search framework, termed ZoomNAS, to promote both the accuracy and efficiency of whole-body pose estimation. ZoomNAS jointly searches the model architecture and the connections between different sub-modules, and automatically allocates computational complexity to the searched sub-modules. To train and evaluate ZoomNAS, we introduce the first large-scale 2D human whole-body dataset, namely COCO-WholeBody V1.0, which annotates 133 keypoints for in-the-wild images. Extensive experiments demonstrate the effectiveness of ZoomNAS and the significance of COCO-WholeBody V1.0.
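To give a rough feel for the "zoom in on small parts" idea behind hierarchical whole-body estimation, here is a minimal sketch that crops a region around a coarse body keypoint (e.g., a wrist) and resamples it at higher resolution for a dedicated sub-network; the crop size, resolution, and keypoint location are illustrative assumptions, not ZoomNet's actual configuration.

```python
import torch
import torch.nn.functional as F

def crop_and_zoom(image, center, size, out_res=64):
    """Crop a square region around a predicted body keypoint and resample it at higher resolution,
    so a hand/face sub-network can operate on a magnified view of the small part."""
    _, _, H, W = image.shape
    cx, cy = center
    x0, y0 = int(max(cx - size // 2, 0)), int(max(cy - size // 2, 0))
    x1, y1 = min(x0 + size, W), min(y0 + size, H)
    crop = image[:, :, y0:y1, x0:x1]
    return F.interpolate(crop, size=(out_res, out_res), mode="bilinear", align_corners=False)

# illustrative usage: zoom into a 48x48 region around a hypothetical wrist location
img = torch.randn(1, 3, 256, 192)
hand_patch = crop_and_zoom(img, center=(60, 170), size=48)
print(hand_patch.shape)   # torch.Size([1, 3, 64, 64])
```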
Human pose estimation aims to accurately estimate a wide variety of human poses. However, existing datasets often follow a long-tailed distribution in which unusual poses occupy only a small fraction, which further leads to a lack of diversity among rare poses. These issues limit the generalization ability of current pose estimators. In this paper, we propose a simple yet effective data augmentation method, termed Pose Transformation (PoseTrans), to alleviate the aforementioned problems. Specifically, we propose a Pose Transformation Module (PTM) to create new training samples with diverse poses, and adopt a pose discriminator to ensure the plausibility of the augmented poses. Furthermore, we propose a Pose Clustering Module (PCM) to measure pose rarity and select the "rarest" poses to help balance the long-tailed distribution. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our method, especially on rare poses. Moreover, our method is efficient and easy to implement, and can be readily integrated into the training pipeline of existing pose estimation models.
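One plausible way to quantify pose rarity by clustering is sketched below: flattened keypoint vectors are clustered, and a pose is scored as rarer when it sits far from its cluster center and in a small cluster. This is only an illustration of the clustering idea, not necessarily the PCM formulation; the cluster count and scoring rule are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def pose_rarity(poses, n_clusters=20, seed=0):
    """Cluster flattened, normalized keypoint vectors and score each pose by its distance to the
    cluster center divided by the cluster size: larger scores = rarer poses."""
    flat = poses.reshape(len(poses), -1)
    flat = (flat - flat.mean(axis=0)) / (flat.std(axis=0) + 1e-8)   # normalize each coordinate
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(flat)
    sizes = np.bincount(km.labels_, minlength=n_clusters)
    dist_to_center = np.linalg.norm(flat - km.cluster_centers_[km.labels_], axis=1)
    return dist_to_center / sizes[km.labels_]

# illustrative usage: 1000 poses with 17 (x, y) keypoints each
scores = pose_rarity(np.random.rand(1000, 17, 2))
rarest = np.argsort(scores)[-50:]   # indices of the 50 "rarest" poses to favor during augmentation
```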
Large-scale datasets have played an indispensable role in the recent success of face generation/editing and have significantly promoted the progress of emerging research fields. However, the academic community still lacks a video dataset with diverse facial attribute annotations, which is crucial for research on face-related videos. In this work, we propose a large-scale, high-quality, and diverse video dataset with rich facial attribute annotations, named the High-Quality Celebrity Video Dataset (CelebV-HQ). CelebV-HQ contains 35,666 video clips with a resolution of at least 512x512, involving 15,653 identities. All clips are manually labeled with 83 facial attributes, covering appearance, action, and emotion. We conduct a comprehensive analysis of age, ethnicity, brightness stability, motion smoothness, head pose diversity, and data quality to demonstrate the diversity and temporal coherence of CelebV-HQ. Besides, its versatility and potential are validated on two representative tasks, namely unconditional video generation and video facial attribute editing. Furthermore, we envision the future potential of CelebV-HQ, as well as the new opportunities and challenges it will bring to related research directions. The data, code, and models are publicly available. Project page: https://celebv-hq.github.io.
Estimating the 3D poses of two interacting hands from a single RGB image is essential for understanding human behavior. Unlike most previous works that directly predict the 3D poses of two interacting hands, we propose to decompose the challenging interacting hand pose estimation task and estimate the pose of each hand separately. In this way, the latest research progress on single-hand pose estimation can be directly exploited. However, hand pose estimation in interacting scenarios is very challenging, due to (1) severe hand occlusion and (2) the visual ambiguity between the two hands. To tackle these two challenges, we propose a novel Hand De-occlusion and Removal (HDR) framework to perform hand de-occlusion and distractor removal. We also propose the first large-scale synthetic amodal hand dataset, termed the Amodal InterHand Dataset (AIH), to facilitate model training and promote the development of related research. Experiments show that the proposed method significantly outperforms previous state-of-the-art interacting hand pose estimation approaches. Code and data are available at https://github.com/menghao666/hdr.
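A rough sketch of the decomposition idea follows: each hand crop is first "cleaned up" (standing in for de-occlusion and distractor removal) and then passed through an independent single-hand pose head. The networks below are dummy stand-ins with assumed shapes, not the HDR modules.

```python
import torch
import torch.nn as nn

class HandPipeline(nn.Module):
    """Illustrative decomposition: clean up each per-hand crop, then predict 21 3D joints per hand
    with a single-hand estimator applied to each hand independently."""
    def __init__(self, n_joints=21):
        super().__init__()
        self.cleanup = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 3, 3, padding=1))      # stand-in for de-occlusion + removal
        self.single_hand = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                         nn.Linear(3, n_joints * 3))      # stand-in for a single-hand pose net

    def forward(self, left_crop, right_crop):
        poses = []
        for crop in (left_crop, right_crop):                              # each hand is handled separately
            clean = self.cleanup(crop)
            poses.append(self.single_hand(clean).view(-1, 21, 3))
        return poses

left, right = torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128)
left_pose, right_pose = HandPipeline()(left, right)   # two (1, 21, 3) joint sets
```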
Existing works on 2D pose estimation mainly focus on a certain category, e.g., humans, animals, and vehicles. However, there are many application scenarios that require detecting the poses/keypoints of unseen object classes. In this paper, we introduce the task of Category-Agnostic Pose Estimation (CAPE), which aims to create a pose estimation model capable of detecting the pose of any class of objects, given only a few samples with keypoint definitions. To achieve this goal, we formulate the pose estimation problem as a keypoint matching problem and design a novel CAPE framework, termed the POse Matching Network (POMNet). A transformer-based Keypoint Interaction Module (KIM) is proposed to capture the interactions among different keypoints as well as the relationship between the support and query images. We also introduce the Multi-category Pose (MP-100) dataset, a 2D pose dataset covering 100 object categories with 20K instances, which is carefully designed for developing CAPE algorithms. Experiments show that our method outperforms other baseline approaches. Code and data are available at https://github.com/luminxu/Pose-for-Everything.
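The keypoint-matching formulation can be sketched as follows: each support keypoint feature is correlated with the query feature map, and the peak of the resulting similarity map gives the predicted location. The shapes and the cosine-similarity matcher are illustrative assumptions, not POMNet's actual interface.

```python
import torch
import torch.nn.functional as F

def match_keypoints(support_kpt_feats, query_feat_map):
    """Pose estimation as keypoint matching: correlate each support keypoint feature with the query
    feature map and take the argmax of the similarity map as the predicted grid location."""
    # support_kpt_feats: (K, C), one feature per defined keypoint; query_feat_map: (C, H, W)
    C, H, W = query_feat_map.shape
    sims = torch.einsum("kc,chw->khw",
                        F.normalize(support_kpt_feats, dim=1),
                        F.normalize(query_feat_map, dim=0))              # cosine similarity maps, (K, H, W)
    flat_idx = sims.view(sims.shape[0], -1).argmax(dim=1)
    ys = torch.div(flat_idx, W, rounding_mode="floor")
    xs = flat_idx % W
    return torch.stack([xs, ys], dim=1), sims                            # (K, 2) grid coordinates, heatmaps

coords, heatmaps = match_keypoints(torch.randn(17, 256), torch.randn(256, 64, 48))
```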
Few-shot learning (FSL) aims to recognize novel queries with only a few support samples by leveraging prior knowledge from a base dataset. In this paper, we consider the domain shift problem in FSL and aim to address the domain gap between the support set and the query set. Different from previous cross-domain FSL (CD-FSL) works that consider the domain shift between base and novel classes, the new problem, termed cross-domain cross-set FSL (CDCS-FSL), requires the few-shot learner not only to adapt to the new domain, but also to be consistent across different domains within each novel class. To this end, we propose a novel approach, namely StabPA, to learn prototypical compact and cross-domain aligned representations, so that the domain shift and few-shot learning can be addressed simultaneously. We evaluate our approach on two new CDCS-FSL benchmarks built from the DomainNet and Office-Home datasets, respectively. Remarkably, our approach outperforms multiple elaborated baselines, e.g., improving 5-shot accuracy by 6.0 points on DomainNet. Code is available at https://github.com/wentaochen0813/cdcs-fsl
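To illustrate what "prototypical compact and cross-domain aligned representations" can mean in practice, here is a minimal sketch combining a compactness term (features pulled toward their class prototype) with an alignment term (per-class prototypes from two domains pulled toward each other). This is an illustration of the general idea under assumed shapes and losses, not the exact StabPA objective.

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(feats_a, feats_b, labels_a, labels_b, n_classes):
    """Compactness: pull each feature toward its class prototype.
    Alignment: pull per-class prototypes computed in two domains toward each other."""
    protos_a = torch.stack([feats_a[labels_a == c].mean(0) for c in range(n_classes)])
    protos_b = torch.stack([feats_b[labels_b == c].mean(0) for c in range(n_classes)])
    compact = F.mse_loss(feats_a, protos_a[labels_a]) + F.mse_loss(feats_b, protos_b[labels_b])
    align = F.mse_loss(protos_a, protos_b)
    return compact + align

# illustrative usage: 64-dim features for 40 samples per domain, 5 classes present in both domains
labels = torch.arange(40) % 5
loss = prototype_alignment_loss(torch.randn(40, 64), torch.randn(40, 64), labels, labels, n_classes=5)
```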
Supervised learning has been widely used for attack classification, which requires high-quality data and labels. However, the data is often imbalanced and sufficient annotations are difficult to obtain. Moreover, supervised models need to cope with real-world deployment issues, such as defending against unseen artificial attacks. To tackle these challenges, we propose a semi-supervised fine-grained attack classification framework, which consists of an encoder and a two-branch structure and can be generalized to different supervised models. A multi-layer perceptron with residual connections is used as the encoder to extract features and reduce complexity. A Recurrent Prototype Module (RPM) is proposed to train the encoder effectively in a semi-supervised manner. To alleviate the data imbalance problem, we introduce Weight-Task Consistency (WTC) into the iterative process of RPM by assigning larger weights to classes with fewer samples in the loss function. Furthermore, to cope with new attacks in real-world deployment, we propose an Active Adjustment Resampling (AAR) method, which can better discover the distribution of unseen sample data and adjust the parameters of the encoder. Experimental results show that our model outperforms state-of-the-art semi-supervised attack detection methods, improving classification accuracy by 3% and reducing training time by 90%.
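A standard way to assign larger loss weights to minority classes, as the abstract describes for handling imbalance, is inverse-frequency class weighting; the sketch below illustrates only this weighting idea, not the full WTC/RPM procedure, and the class counts are hypothetical.

```python
import torch
import torch.nn as nn

def class_balanced_weights(labels, n_classes):
    """Give classes with fewer samples larger loss weights (inverse frequency, normalized to mean 1)."""
    counts = torch.bincount(labels, minlength=n_classes).float().clamp(min=1)
    return counts.sum() / (n_classes * counts)

# illustrative usage on an imbalanced 4-class attack classification problem
labels = torch.tensor([0] * 90 + [1] * 5 + [2] * 3 + [3] * 2)
criterion = nn.CrossEntropyLoss(weight=class_balanced_weights(labels, n_classes=4))
logits = torch.randn(100, 4)
loss = criterion(logits, labels)
```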